DNA-Encoded Library (DEL) technology has enabled significant advances in hit identification by allowing efficient testing of combinatorially generated molecular libraries. DEL screens measure protein binding affinity through sequencing reads of molecules tagged with unique DNA barcodes that survive a series of selection experiments. Computational models have been deployed to learn the latent binding affinities that are correlated with the sequenced count data; however, this correlation is often obfuscated by various sources of noise introduced in the complicated data-generation process. To denoise DEL count data and screen for molecules with good binding affinity, computational models need modeling assumptions that capture the true signal underlying the data. Recent advances in DEL models have focused on probabilistic formulations of count data, but existing approaches have thus far been limited to 2-D molecule-level representations. We introduce a new paradigm, DEL-Dock, that combines ligand-based descriptors with 3-D spatial information from docked protein-ligand complexes. 3-D spatial information allows our model to learn over the actual binding modality rather than relying only on structural information of the ligand. We show that our model effectively denoises DEL count data to predict molecule enrichment scores that are better correlated with experimental binding affinity measurements than prior works. Moreover, by learning over a collection of docked poses, we demonstrate that our model, trained only on DEL data, implicitly learns to perform good docking-pose selection without requiring external supervision from expensive-to-source protein crystal structures.
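For illustration, a minimal sketch of how learning over a collection of docked poses can yield implicit pose selection (the layer sizes, feature dimensions, and attention pooling below are assumptions for illustration, not the authors' architecture):

```python
import torch
import torch.nn as nn

class PosePooling(nn.Module):
    """Hypothetical sketch: attention-weighted pooling over docked poses.

    Each molecule contributes K docked poses featurized into d_pose-dim vectors.
    The attention weights act as a soft pose selector learned only from
    DEL enrichment supervision (no crystal-structure labels).
    """

    def __init__(self, d_pose: int = 128, d_ligand: int = 64):
        super().__init__()
        self.pose_score = nn.Sequential(nn.Linear(d_pose, 64), nn.ReLU(), nn.Linear(64, 1))
        self.enrichment_head = nn.Sequential(
            nn.Linear(d_pose + d_ligand, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, pose_feats, ligand_feats):
        # pose_feats: (batch, K, d_pose); ligand_feats: (batch, d_ligand)
        attn = torch.softmax(self.pose_score(pose_feats), dim=1)   # (batch, K, 1)
        pooled = (attn * pose_feats).sum(dim=1)                    # (batch, d_pose)
        x = torch.cat([pooled, ligand_feats], dim=-1)
        return self.enrichment_head(x).squeeze(-1), attn.squeeze(-1)
```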
In this paper, we assess the viability of transformer models in end-to-end InfoSec settings, in which no intermediate feature representations or processing steps occur outside the model. We implement transformer models for two distinct InfoSec data formats - specifically URLs and PE files - in a novel end-to-end approach, and explore a variety of architectural designs, training regimes, and experimental settings to determine the ingredients necessary for performant detection models. We show that, in contrast to conventional transformers trained on more standard NLP-related tasks, our URL transformer model requires a different training approach to reach high performance levels. Specifically, we show that 1) pre-training on a massive corpus of unlabeled URL data for an auto-regressive task does not readily transfer to binary classification of malicious versus benign URLs, but 2) using an auxiliary auto-regressive loss improves performance when training from scratch. We introduce a method for mixed-objective optimization that dynamically balances contributions from both loss terms so that neither dominates. We show that this method yields quantitative evaluation metrics comparable to those of several top-performing benchmark classifiers. Unlike URLs, binary executables contain longer and more distributed sequences of information-rich bytes. To accommodate such lengthy byte sequences, we introduce additional context length into the transformer by providing its self-attention layers with an adaptive span similar to Sukhbaatar et al. We demonstrate that this approach performs comparably to well-established malware detection models on benchmark PE file datasets, but also point out the need for further exploration into model improvements in scalability and compute efficiency.
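As a minimal sketch of the mixed-objective idea (the specific rescaling rule below is an assumption for illustration, not necessarily the paper's exact scheme), the auxiliary auto-regressive loss can be rescaled each step so that neither term dominates the gradient signal:

```python
import torch

def mixed_objective_loss(cls_loss: torch.Tensor, ar_loss: torch.Tensor) -> torch.Tensor:
    """Hypothetical dynamic balancing of a binary-classification loss and an
    auxiliary auto-regressive loss so that neither term dominates.

    The auxiliary term is rescaled toward the magnitude of the primary term;
    detach() keeps the scaling factor out of the gradient computation.
    """
    scale = (cls_loss.detach() / (ar_loss.detach() + 1e-8)).clamp(max=1.0)
    return cls_loss + scale * ar_loss
```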
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
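Because the models are released publicly, a checkpoint can be loaded with the Hugging Face transformers library; the snippet below is a usage sketch assuming the smaller `bigscience/bloom-560m` variant as an example checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small publicly released BLOOM variant (assumed example checkpoint id).
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("BLOOM is a 176B-parameter open-access language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```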
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
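As a brief illustration of this compositional style, the sketch below chains MONAI transforms with a MONAI network; exact argument names can vary between MONAI releases, so treat it as an approximate usage example rather than canonical API:

```python
import torch
from monai.transforms import Compose, LoadImage, EnsureChannelFirst, ScaleIntensity
from monai.networks.nets import UNet

# Medical-imaging-aware preprocessing pipeline (handles NIfTI, DICOM, etc.).
preprocess = Compose([LoadImage(image_only=True), EnsureChannelFirst(), ScaleIntensity()])

# A purpose-specific architecture provided by MONAI.
net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
           channels=(16, 32, 64, 128), strides=(2, 2, 2))

# image = preprocess("path/to/volume.nii.gz")   # -> tensor of shape (1, D, H, W)
with torch.no_grad():
    logits = net(torch.randn(1, 1, 64, 64, 64))  # dummy volume for a shape check
print(logits.shape)  # (1, 2, 64, 64, 64)
```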
In recent years, convolutional neural networks (CNNs) have demonstrated their ability to solve problems in many fields with accuracy that was previously unattainable. However, this comes with extensive computational requirements, which makes it impossible for ordinary CPUs to deliver the required real-time performance. At the same time, there has been a surge of interest in FPGAs for accelerating CNN inference, owing to their ability to realize custom designs with different levels of parallelism. Moreover, FPGAs offer better performance per watt than GPUs. The current trend in FPGA-based CNN accelerators is to implement multiple convolutional layer processors (CLPs), each tailored to a subset of layers. However, the growing complexity of CNN architectures makes it increasingly challenging to optimize the resources available on the target FPGA device for best performance. In this paper, we propose a CNN accelerator and an accompanying automated design methodology that employs metaheuristics to partition the available FPGA resources in designing a multi-CLP accelerator. Specifically, the proposed design tool adopts simulated annealing (SA) and tabu search (TS) algorithms to find the required number of CLPs and their respective configurations that achieve optimal performance on a given target FPGA device. The focus here is on the key specifications and hardware resources, including digital signal processors, block RAMs, and off-chip memory bandwidth. Experimental results and comparisons using four well-known benchmark CNNs are presented, showing that the proposed acceleration framework is both encouraging and promising. The SA-/TS-based multi-CLP achieves 1.31x-2.37x higher throughput than state-of-the-art single-/multi-CLP approaches in accelerating the AlexNet, SqueezeNet 1.1, VGGNet, and GoogLeNet architectures on the VC707 and VC709 FPGA boards.
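A schematic of the metaheuristic search (the cost model and neighborhood move below are placeholders, not the paper's actual resource/performance model): simulated annealing perturbs a candidate allocation of DSP/BRAM budget across CLPs and keeps worse solutions with a temperature-dependent probability.

```python
import math
import random

def simulated_annealing(init_partition, cost, neighbor,
                        t0=1.0, t_min=1e-3, alpha=0.95, iters_per_temp=50):
    """Generic SA loop. `cost` would estimate latency from a resource/performance
    model of the candidate multi-CLP design; `neighbor` would move DSP/BRAM budget
    (or layer assignments) between CLPs. Both are placeholders in this sketch."""
    best = cur = init_partition
    best_c = cur_c = cost(cur)
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            cand = neighbor(cur)
            cand_c = cost(cand)
            # Accept improvements always, worse moves with probability exp(-delta/t).
            if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / t):
                cur, cur_c = cand, cand_c
                if cur_c < best_c:
                    best, best_c = cur, cur_c
        t *= alpha  # geometric cooling schedule
    return best, best_c
```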
This paper proposes transfer initialization with modified fully connected layers for COVID-19 diagnosis. Convolutional neural networks (CNNs) have achieved remarkable results in image classification. However, due to the complexity of image-recognition applications, training a high-performance model is a very complex and time-consuming process. Transfer learning, on the other hand, is a relatively new learning method that has been used in many fields to achieve good performance with less computation. In this study, PyTorch pre-trained models (VGG19_bn and WideResNet-101) are applied for the first time to the MNIST dataset for initialization, with modified fully connected layers. The PyTorch pre-trained models used were previously trained on ImageNet. The proposed model was developed and validated in Kaggle notebooks, and achieved an excellent accuracy of 99.77% without spending huge computational time during network training. We also applied the same methodology to the SIIM-FISABIO-RSNA COVID-19 detection dataset and achieved 80.01% accuracy. In contrast, previous methods required extensive computational time during training to achieve high-performance models. The code is available at the following link: github.com/dipuk0506/spinalnet
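Transfer initialization of this kind amounts to loading an ImageNet-pretrained backbone and replacing its classifier head; a minimal torchvision sketch (recent torchvision; older versions use `pretrained=True`), noting that the paper's modified head (see the linked repository) differs from the plain linear layer assumed here:

```python
import torch.nn as nn
from torchvision import models

num_classes = 10  # e.g. MNIST digits

# ImageNet-pretrained VGG19 with batch norm, used only for initialization.
model = models.vgg19_bn(weights=models.VGG19_BN_Weights.IMAGENET1K_V1)

# Replace the final fully connected layer with a task-specific head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

# Optionally freeze the convolutional backbone and train only the new head.
for p in model.features.parameters():
    p.requires_grad = False
```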
In recent years, face recognition systems have achieved extraordinary success thanks to promising advances in deep learning architectures. However, they still fail to achieve the expected accuracy when matching profile images against a gallery of frontal images. Current approaches either perform pose normalization (i.e., frontalization) or disentangle pose information for face recognition. Instead, we propose a new approach that uses pose as auxiliary information via an attention mechanism. In this paper, we hypothesize that pose-attended information obtained through an attention mechanism can guide contextual and distinctive feature extraction from profile faces, further enabling better representation learning in the embedding domain. To achieve this, first, we design a unified coupled profile-to-frontal face recognition network. It learns a mapping from faces to a compact embedding subspace via a class-specific contrastive loss. Second, we develop a novel pose attention block (PAB) to specifically guide the extraction of pose-agnostic features from profile faces. More specifically, PAB is designed to explicitly help the network emphasize important features along the channel and spatial dimensions, while learning discriminative yet pose-invariant features in the embedding subspace. To validate the effectiveness of the proposed method, we conduct experiments on both controlled and in-the-wild benchmarks, including Multi-PIE, CFP, and IJB-C, and demonstrate superiority over the state of the art.
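Emphasizing features along the channel and spatial dimensions resembles a channel-plus-spatial attention module; the sketch below shows a generic block of that kind (CBAM-style) and is only an assumed approximation of PAB, which additionally conditions on pose information:

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel + spatial attention (CBAM-style); an assumed
    approximation of the pose attention block, which also uses pose cues."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):                                     # x: (B, C, H, W)
        w_c = self.channel_mlp(x.mean(dim=(2, 3)))            # (B, C)
        x = x * w_c.unsqueeze(-1).unsqueeze(-1)               # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)   # (B, 2, H, W)
        return x * self.spatial_conv(s)                       # spatial attention
```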
In this paper, we attempt to draw a connection between frontal and profile face images in an abstract embedding space. We exploit this connection using a coupled-encoder network to project frontal and profile face images into a common latent embedding space. The proposed model enforces the similarity of representations in the embedding space by maximizing the mutual information between the two views of a face. The proposed coupled encoder benefits from three contributions for matching faces with extreme pose differences. First, we employ pose-aware contrastive learning to maximize the mutual information between the frontal and profile representations of the same identity. Second, a memory buffer composed of latent representations accumulated over past iterations is integrated into the model, so that it can refer to relatively many more instances than the mini-batch size allows. Third, a novel pose-aware adversarial domain adaptation method forces the model to learn an asymmetric mapping from profile to frontal representations. In our framework, the coupled encoders learn to enlarge the margin between the distributions of genuine and impostor faces, which results in high mutual information between different views of the same identity. The effectiveness of the proposed model is investigated through extensive experiments, evaluations, and ablation studies on four benchmark datasets, and comparisons with compelling state-of-the-art algorithms.
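A minimal sketch of the mutual-information-maximizing contrastive objective with a memory buffer of past representations (an InfoNCE-style approximation; the temperature and queue handling here are assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(frontal, profile, queue, temperature: float = 0.07):
    """InfoNCE-style loss: pull together frontal/profile embeddings of the
    same identity; push profile embeddings away from queued negatives.

    frontal, profile: (B, D) embeddings of matched identities.
    queue: (N, D) embeddings accumulated over past iterations (negatives).
    """
    f = F.normalize(frontal, dim=1)
    p = F.normalize(profile, dim=1)
    q = F.normalize(queue, dim=1)

    pos = (f * p).sum(dim=1, keepdim=True)            # (B, 1) positive similarities
    neg = p @ q.t()                                   # (B, N) negatives from buffer
    logits = torch.cat([pos, neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)            # positive sits at index 0
```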
We examine federated learning (FL) with over-the-air (OTA) aggregation, where mobile users (MUs) aim to reach consensus on a global model with the help of a parameter server (PS) that aggregates the local gradients. In OTA FL, MUs train their models using local data in each training round and transmit their gradients simultaneously over the same frequency band in an uncoded fashion. Based on the received signal of the superposed gradients, the PS performs the global model update. Although OTA FL significantly reduces the communication cost, it is vulnerable to adverse channel effects and noise. Employing multiple antennas at the receiver side can mitigate these effects, but path loss remains a limiting factor for users located far from the PS. To address this issue, in this paper we propose a wireless-based hierarchical FL scheme that uses intermediate servers (ISs) to form clusters in areas where MUs are more densely located. Our scheme uses OTA cluster aggregation for MUs to communicate with their corresponding IS, and OTA global aggregation from the ISs to the PS. We present a convergence analysis for the proposed algorithm and show, through numerical evaluation of the derived analytical expressions as well as experimental results, that utilizing ISs yields faster convergence and better performance than OTA FL alone, while using less transmission power. We also validate the performance results using different numbers of cluster iterations, as well as different datasets and data distributions. We conclude that the best choice of cluster aggregation depends on the data distribution across the MUs and clusters.
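The two-level aggregation can be sketched numerically (the additive-noise channel model below is a simplification of OTA superposition, used only for illustration): gradients are first averaged within each cluster at its IS, and the cluster aggregates are then averaged at the PS.

```python
import numpy as np

_rng = np.random.default_rng(0)

def ota_aggregate(gradients, noise_std=0.01):
    """Simplified OTA aggregation: simultaneous uncoded transmissions superpose
    on the channel, so the receiver observes their sum plus receiver noise."""
    superposed = np.sum(gradients, axis=0) + _rng.normal(0.0, noise_std, np.shape(gradients[0]))
    return superposed / len(gradients)

def hierarchical_round(cluster_gradients, noise_std=0.01):
    """Cluster-level OTA aggregation at the ISs, followed by global OTA
    aggregation of the IS outputs at the PS.

    cluster_gradients: list of clusters, each a list of per-MU gradient arrays.
    """
    is_updates = [ota_aggregate(grads, noise_std) for grads in cluster_gradients]
    return ota_aggregate(is_updates, noise_std)
```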
Language models demonstrate both quantitative improvements and new qualitative capabilities as their scale increases. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.